AI Ethics


Interview with Alice Xiang: Fair human-centric image dataset for ethical AI benchmarking

AIHub

Earlier this month, Sony AI released a dataset that establishes a new benchmark for AI ethics in computer vision models. The research behind the dataset, named the Fair Human-Centric Image Benchmark (FHIBE), has been published in Nature. FHIBE is the first publicly available, globally diverse, consent-based human image dataset (comprising over 10,000 human images) for evaluating bias across a wide variety of computer vision tasks. We sat down with project lead Alice Xiang, Global Head of AI Governance at Sony Group and Lead Research Scientist for AI Ethics at Sony AI, to discuss the project and the broader implications of this research. Could you start by introducing the project and taking us through some of the main contributions?


A Study on the Framework for Evaluating the Ethics and Trustworthiness of Generative AI

Jeong, Cheonsu, Lee, Seunghyun, Jeong, Seonhee, Kim, Sungsu

arXiv.org Artificial Intelligence

This study provides an in-depth analysis of the ethical and trustworthiness challenges emerging alongside the rapid advancement of generative artificial intelligence (AI) technologies and proposes a comprehensive framework for their systematic evaluation. While generative AI, such as ChatGPT, demonstrates remarkable innovative potential, it simultaneously raises ethical and social concerns, including bias, harmfulness, copyright infringement, privacy violations, and hallucination. Current AI evaluation methodologies, which mainly focus on performance and accuracy, are insufficient to address these multifaceted issues. Thus, this study emphasizes the need for new human-centered criteria that also reflect social impact. To this end, it identifies key dimensions for evaluating the ethics and trustworthiness of generative AI (fairness, transparency, accountability, safety, privacy, accuracy, consistency, robustness, explainability, copyright and intellectual property protection, and source traceability) and develops detailed indicators and assessment methodologies for each. Moreover, it provides a comparative analysis of AI ethics policies and guidelines in South Korea, the United States, the European Union, and China, deriving key approaches and implications from each. The proposed framework applies across the AI lifecycle and integrates technical assessments with multidisciplinary perspectives, thereby offering practical means to identify and manage ethical risks in real-world contexts. Ultimately, the study establishes an academic foundation for the responsible advancement of generative AI and delivers actionable insights for policymakers, developers, users, and other stakeholders, supporting the positive societal contributions of AI technologies.


The Machine Ethics podcast: AI Ethics, Risks and Safety Conference 2025

AIHub

Hosted by Ben Byford, The Machine Ethics Podcast brings together interviews with academics, authors, business leaders, designers and engineers on the subject of autonomous algorithms, artificial intelligence, machine learning, and technology's impact on society. This is a special live episode recorded at the AI Ethics, Risks and Safety Conference 2025 in Bristol in May 2025: a panel titled "Living with AI: the next five years", hosted at the conference. For more information about the AI Ethics, Risks and Safety Conference, go to Collective Intelligence's website. Thanks to Karin Rudolph and everyone who helped organise another great event this year.


Assessing the Ecological Impact of AI

Wenmackers, Sylvia

arXiv.org Artificial Intelligence

Philosophers of technology have recently started paying more attention to the environmental impacts of AI, in particular of large language models (LLMs) and generative AI (genAI) applications. Meanwhile, few developers of AI give concrete estimates of the ecological impact of their models and products, and even when they do, their analysis is often limited to greenhouse gas emissions from certain stages of AI development or use. The current proposal encourages practically viable analyses of the sustainability aspects of genAI informed by philosophical ideas.


To Post or Not to Post: AI Ethics in the Age of Big Tech

Communications of the ACM

What is the role of an ethicist? Is it to be an impartial observer? A guide to what is good or bad? Here, I will explore the different roles in the context of AI ethics through the terms descriptive, normative, and action AI ethics. AI ethics is a specific field of applied ethics nested in technology ethics and computer ethics [30].


A Toolkit for Compliance, a Toolkit for Justice: Drawing on Cross-sectoral Expertise to Develop a Pro-justice EU AI Act Toolkit

Hollanek, Tomasz, Pi, Yulu, Fiorini, Cosimo, Vignali, Virginia, Peters, Dorian, Drage, Eleanor

arXiv.org Artificial Intelligence

The introduction of the AI Act in the European Union presents the AI research and practice community with a set of new challenges related to compliance. While it is certain that AI practitioners will require additional guidance and tools to meet these requirements, previous research on toolkits that aim to translate the theory of AI ethics into development and deployment practice suggests that such resources suffer from multiple limitations. These limitations stem, in part, from the fact that the toolkits are either produced by industry-based teams or by academics whose work tends to be abstract and divorced from the realities of industry. In this paper, we discuss the challenge of developing an AI ethics toolkit for practitioners that helps them comply with new AI-focused regulation, but that also moves beyond mere compliance to consider broader socio-ethical questions throughout development and deployment. The toolkit was created through a cross-sectoral collaboration between an academic team based in the UK and an industry team in Italy. We outline the background and rationale for creating a pro-justice AI Act compliance toolkit, detail the process undertaken to develop it, and describe the collaboration and negotiation efforts that shaped its creation. We aim for the described process to serve as a blueprint for other teams navigating the challenges of academia-industry partnerships and aspiring to produce usable and meaningful AI ethics resources.


A Participatory Strategy for AI Ethics in Education and Rehabilitation grounded in the Capability Approach

Cesaroni, Valeria, Pasqua, Eleonora, Bisconti, Piercosma, Galletti, Martina

arXiv.org Artificial Intelligence

AI-based technologies have significant potential to enhance inclusive education and clinical-rehabilitative contexts for children with Special Educational Needs and Disabilities. AI can enhance learning experiences, empower students, and support both teachers and rehabilitators. However, their usage presents challenges that require a systemic-ecological vision, ethical considerations, and participatory research. Therefore, research and technological development must be rooted in a strong ethical-theoretical framework. The Capability Approach - a theoretical model of disability, human vulnerability, and inclusion - offers a more relevant perspective on functionality, effectiveness, and technological adequacy in inclusive learning environments. In this paper, we propose a participatory research strategy with different stakeholders through a case study on the ARTIS Project, which develops an AI-enriched interface to support children with text comprehension difficulties. Our research strategy integrates ethical, educational, clinical, and technological expertise in designing and implementing AI-based technologies for children's learning environments through focus groups and collaborative design sessions. We believe that this holistic approach to AI adoption in education can help bridge the gap between technological innovation and ethical responsibility.


AI Ethics and Social Norms: Exploring ChatGPT's Capabilities From What to How

Veisi, Omid, Bahrami, Sasan, Englert, Roman, Müller, Claudia

arXiv.org Artificial Intelligence

Using LLMs in healthcare, Computer-Supported Cooperative Work, and Social Computing requires the examination of ethical and social norms to ensure safe incorporation into human life. We conducted a mixed-method study, including an online survey with 111 participants and an interview study with 38 experts, to investigate AI ethics and social norms in ChatGPT as an everyday tool. This study aims to evaluate whether ChatGPT, in an empirical context, operates in accordance with ethics and social norms, which is critical for understanding actions in industrial and academic research and achieving machine ethics. The findings of this study provide initial insights into six important aspects of AI ethics, including bias, trustworthiness, security, toxicology, social norms, and ethical data. Significant obstacles related to transparency and bias in unsupervised data collection methods are identified as ChatGPT's ethical concerns.


e-person Architecture and Framework for Human-AI Co-adventure Relationship

Esaki, Kanako, Matsumura, Tadayuki, Shao, Yang, Mizuno, Hiroyuki

arXiv.org Artificial Intelligence

This paper proposes the e-person architecture for the unified and incremental development of AI ethics. The e-person architecture takes the reduction of uncertainty through collaborative cognition and action with others as a unified basis for ethics. By classifying and defining uncertainty along two axes, (1) first-, second-, and third-person perspectives, and (2) the difficulty of inference based on the depth of information, we support the unified and incremental development of AI ethics. In addition, we propose the e-person framework based on the free energy principle, which treats the reduction of uncertainty as a unifying principle of brain function, with the aim of implementing the e-person architecture, and we present our previous work and future challenges based on the proposed framework.


Three Kinds of AI Ethics

Ratti, Emanuele

arXiv.org Artificial Intelligence

There is an overwhelming abundance of work in AI ethics. This growth is chaotic because of its suddenness, its volume, and its multidisciplinary nature. This makes it difficult to keep track of debates and to systematically characterize the goals, research questions, methods, and expertise required of AI ethicists. In this article, I show that the relation between AI and ethics can be characterized in at least three ways, which correspond to three well-represented kinds of AI ethics: ethics and AI; ethics in AI; ethics of AI. I elucidate the features of these three kinds of AI ethics, characterize their research questions, and identify the kind of expertise that each needs. I also show how certain criticisms of AI ethics are misplaced, as they are directed from the point of view of one kind of AI ethics at another kind with different goals. All in all, this work sheds light on the nature of AI ethics and sets the groundwork for more informed discussions about the scope, methods, and training of AI ethicists.